Unified LASSO Estimation via Least Squares Approximation
Author
Abstract
We propose a method of least squares approximation (LSA) for unified yet simple LASSO estimation. Our general theoretical framework includes ordinary least squares, generalized linear models, quantile regression, and many others as special cases. Specifically, LSA can transform many different types of LASSO objective functions into asymptotically equivalent least-squares problems. Thereafter, the standard asymptotic theory can be established and the LARS algorithm can be applied. In particular, if the adaptive LASSO penalty and a BIC-type tuning parameter selector are used, the resulting LSA estimator can be as efficient as the oracle estimator. Extensive numerical studies confirm our theory.
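To make the LSA idea concrete: given any root-n consistent pilot estimate (e.g. an unpenalized MLE or quantile-regression fit) together with an estimate of its asymptotic covariance, the penalized objective is replaced by a quadratic form in the pilot estimate plus an adaptive LASSO penalty. The sketch below, with hypothetical function names and a plain coordinate-descent solver of our own (the paper itself uses LARS), illustrates this approximation:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lsa_adaptive_lasso(beta_tilde, sigma_hat, lam, n_iter=500, tol=1e-10):
    """Least squares approximation (LSA) with an adaptive LASSO penalty.

    Minimizes, by coordinate descent,
        (b - beta_tilde)' inv(sigma_hat) (b - beta_tilde)
            + lam * sum_j |b_j| / |beta_tilde_j|,
    where beta_tilde is any root-n consistent pilot estimate and
    sigma_hat estimates its asymptotic covariance.  This is an
    illustrative solver, not the LARS algorithm used in the paper.
    """
    omega = np.linalg.inv(sigma_hat)       # precision matrix of the pilot fit
    w = lam / np.abs(beta_tilde)           # adaptive penalty weights
    b = beta_tilde.copy().astype(float)
    p = len(b)
    for _ in range(n_iter):
        b_old = b.copy()
        for j in range(p):
            # Cross terms from the other coordinates, excluding j itself.
            resid = omega[j] @ (b - beta_tilde) - omega[j, j] * (b[j] - beta_tilde[j])
            c = omega[j, j] * beta_tilde[j] - resid
            # Closed-form coordinate update: soft-thresholded least squares.
            b[j] = soft_threshold(c, w[j] / 2.0) / omega[j, j]
        if np.max(np.abs(b - b_old)) < tol:
            break
    return b

# With an identity covariance the problem separates coordinate-wise:
# small pilot coefficients are shrunk exactly to zero, large ones barely move.
b = lsa_adaptive_lasso(np.array([2.0, 0.05]), np.eye(2), lam=0.2)
print(b)  # → [1.95 0.  ]
```

Because the penalty weight for coordinate j is inversely proportional to |beta_tilde_j|, coefficients whose pilot estimates are near zero receive a heavy penalty and are screened out, which is what drives the oracle property mentioned above.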
Similar References
Sparse Bayes estimation in non-Gaussian models via data augmentation
In this paper we provide a data-augmentation scheme that unifies many common sparse Bayes estimators into a single class. This leads to simple iterative algorithms for estimating the posterior mode under arbitrary combinations of likelihoods and priors within the class. The class itself is quite large: for example, it includes quantile regression, support vector machines, and logistic and multi...
Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
The purpose of this article is to provide an adaptive estimator of the baseline function in the Cox model with high-dimensional covariates. We consider a two-step procedure: first, we estimate the regression parameter of the Cox model via a Lasso procedure based on the partial log-likelihood; second, we plug this Lasso estimator into a least-squares type criterion and then perform a model se...
Shrinkage Estimation of Common Breaks in Panel Data Models via Adaptive Group Fused Lasso∗
In this paper we consider estimation and inference of common breaks in panel data models via adaptive group fused lasso. We consider two approaches — penalized least squares (PLS) for first-differenced models without endogenous regressors, and penalized GMM (PGMM) for first-differenced models with endogeneity. We show that, with probability tending to one, both methods can correctly determine the ...
Asymptotic Analysis of High-dimensional LAD Regression with Lasso
The Lasso is an attractive approach to variable selection in sparse, high-dimensional regression models. Much work has been done to study the selection and estimation properties of the Lasso in the context of least squares regression. However, the least squares based method is sensitive to outliers. An alternative to the least squares method is the least absolute deviations (LAD) method, which is...
Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including ...